

Google fires researcher who claimed LaMDA AI was sentient

Engadget

Blake Lemoine, an engineer who spent the last seven years with Google, has been fired, reports Alex Kantrowitz of the Big Technology newsletter. The news was reportedly broken by Lemoine himself during a taping of the podcast of the same name, though the episode is not yet public. Google confirmed the firing to Engadget. Lemoine, who most recently was part of Google's Responsible AI project, went to the Washington Post last month with claims that one of the company's AI projects had gained sentience. The AI in question, LaMDA -- short for Language Model for Dialogue Applications -- was publicly unveiled by Google last year as a means for computers to better mimic open-ended conversation.


A Child is Born: Google's LaMDA AI is New to The World

#artificialintelligence

I've been thinking (including wordless thoughts) about what it means that there is not only one, but likely more than one AI that has self-awareness.


Google's LaMDA AI can have a 'natural' conversation while pretending to be Pluto

Engadget

Whether it's talking AI or smarter chatbots, Google has spent the last several years teaching AI how to communicate better with humans. Now, the company is showing off its latest research, which could take these efforts to the next level. The company previewed LaMDA ("Language Model for Dialogue Applications"), research it says represents a "breakthrough conversation technology" that will one day enable people to have natural, open-ended conversations about any topic with Google's AI. The technology is still in a research phase, but it could have huge implications for existing Google products like Search and Assistant. While existing chatbots are often trained on a specific topic or programmed to give canned responses, LaMDA "can engage in a free-flowing way about a seemingly endless number of topics," according to Google.